4 research outputs found
Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks
Light field imaging extends traditional photography by capturing both the
spatial and angular distribution of light, which enables new capabilities,
including post-capture refocusing, post-capture aperture control, and depth
estimation from a single shot. Micro-lens array (MLA) based light field cameras
offer a cost-effective approach to capturing light fields. A major drawback of
MLA-based light field cameras is low spatial resolution, because a single image
sensor is shared between spatial and angular
information. In this paper, we present a learning-based light field enhancement
approach. Both the spatial and angular resolution of the captured light field
are enhanced using convolutional neural networks. The proposed method is tested
on real light field data captured with a Lytro light field camera, clearly
demonstrating spatial and angular resolution improvement.
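The abstract does not specify the network architecture. As a rough illustration of the general idea (CNN-based spatial enhancement of a single sub-aperture view), a minimal bicubic-upsample-plus-residual-CNN sketch in the spirit of SRCNN follows; the layer sizes and the `SRRefineNet` name are illustrative assumptions, not the authors' actual design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRRefineNet(nn.Module):
    """Hypothetical sketch: upsample a low-res view bicubically, then
    let a small CNN predict a residual correction restoring detail."""
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 5, padding=2),
        )

    def forward(self, lr_view, scale=2):
        # Bicubic interpolation supplies the coarse spatial upsampling;
        # the CNN adds back high-frequency detail as a residual.
        up = F.interpolate(lr_view, scale_factor=scale, mode="bicubic",
                           align_corners=False)
        return up + self.body(up)

# One 64x64 sub-aperture view, upsampled 2x to 128x128:
net = SRRefineNet()
out = net(torch.randn(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 1, 128, 128])
```

Angular enhancement (synthesizing intermediate views) would need a separate network conditioned on neighboring views, which this sketch does not cover.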
Light-field view synthesis using convolutional block attention module
Consumer light-field (LF) cameras suffer from low or limited resolution
because of the angular-spatial trade-off. To alleviate this drawback, we
propose a novel learning-based approach that utilizes an attention mechanism to
synthesize novel views of a light-field image from a sparse set of input views
(i.e., the 4 corner views) of a camera array. The proposed method divides the
process into three stages: stereo-feature extraction, disparity estimation,
and final image refinement. We use three convolutional neural networks applied
sequentially, one for each stage. A residual convolutional block attention module (CBAM)
is employed for the final adaptive image refinement. Attention modules help the
network learn and focus on the most important features of the image, and are
thus applied sequentially in the channel and spatial dimensions. Experimental
results show the robustness of the proposed method. Our proposed network
outperforms the state-of-the-art learning-based light-field view synthesis
methods on two challenging real-world datasets by 0.5 dB on average.
Furthermore, we provide an ablation study to substantiate our findings.
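The abstract names CBAM as the attention block used in refinement. For reference, a minimal CBAM (channel attention followed by spatial attention, following the original CBAM formulation rather than this paper's exact configuration; the `reduction` and kernel sizes below are illustrative) can be sketched as:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention, then spatial attention, applied sequentially."""
    def __init__(self, channels, reduction=8, spatial_kernel=7):
        super().__init__()
        # Channel attention: a shared MLP scores avg- and max-pooled
        # per-channel descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: a conv over the channel-wise avg and max maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # avg-pooled descriptor
        mx = self.mlp(x.amax(dim=(2, 3)))    # max-pooled descriptor
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(2, 32, 16, 16)
out = CBAM(32)(feat)
print(out.shape)  # torch.Size([2, 32, 16, 16])
```

The "residual" variant mentioned in the abstract would presumably wrap this block as `x + cbam(x)`, so the attention module refines rather than replaces the features.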